Tailcut finder function and script #1325
Conversation
Just based on mean pedestal charge in interleaved pedestals, and using tabulated data (and a parametrization of NSB level vs <ped charge>)
Improved docstrings
@marialainez, @morcuended this is still a draft (I am testing it), but it would be good if you could already comment on the interface, since it is mainly to be used by lstosa.
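For illustration, a hedged sketch of the parametrization mentioned in the description (additional NSB level vs median pedestal charge, looked up in tabulated data). The table values, units and function name below are placeholders, not the actual lstchain implementation:

```python
import numpy as np

# Illustrative table (NOT real lstchain values): median pedestal charge (p.e.)
# measured in interleaved pedestal events vs. the additional NSB level that
# should be added to the MC to reproduce it.
tabulated_ped_charge = np.array([1.0, 1.5, 2.0, 3.0, 4.5])   # p.e.
tabulated_extra_nsb = np.array([0.0, 0.1, 0.2, 0.4, 0.8])    # arbitrary NSB units

def extra_nsb_from_ped_charge(median_ped_charge):
    """Interpolate the additional NSB level from the median pedestal charge."""
    return np.interp(median_ped_charge, tabulated_ped_charge, tabulated_extra_nsb)

print(extra_nsb_from_ped_charge(2.4))  # ~0.28 with this toy table
```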
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files:
@@ Coverage Diff @@
## main #1325 +/- ##
==========================================
- Coverage 73.52% 72.94% -0.58%
==========================================
Files 134 135 +1
Lines 14215 14350 +135
==========================================
+ Hits 10451 10468 +17
- Misses 3764 3882 +118
☔ View full report in Codecov by Sentry.
…itional NSB needed in the MC. Previous ones were based on the mean pedestal charges, with no exclusion of outliers (pixels or subruns). New ones are based on the medians computed in the current code (& excluding outliers)
…with cleaning settings. This is now done by the new script lstchain_find_tailcuts.py
pixels, but truncating the charge distribution of all pixels (hence biasing the result)
(to those obtained with the proper pixel outlier exclusion in MC)
…e, which better characterizes the right-side tail of the distribution (more relevant for the analysis)
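As a hedged sketch of the robust charge estimate described in the commits above (per-pixel statistics with exclusion of outlier pixels), assuming a simple MAD-based cut; the array shapes and thresholds are illustrative, not the code of this PR:

```python
import numpy as np

def robust_ped_charge(charges, n_mad=5.0):
    """
    charges: array of shape (n_pedestal_events, n_pixels) with pixel charges
    from interleaved pedestal events. Returns a camera-wide median computed
    after excluding outlier pixels (e.g. stars or noisy pixels) with a cut
    based on the median absolute deviation (MAD).
    """
    pixel_medians = np.median(charges, axis=0)          # one value per pixel
    camera_median = np.median(pixel_medians)
    mad = np.median(np.abs(pixel_medians - camera_median))
    ok = np.abs(pixel_medians - camera_median) < n_mad * mad
    return np.median(pixel_medians[ok])

# Toy example: 1855 pixels (LST-1 camera), a handful of them artificially bright
rng = np.random.default_rng(0)
fake = rng.normal(2.0, 0.5, size=(1000, 1855))
fake[:, :10] += 20.0
print(robust_ped_charge(fake))   # close to 2.0, unaffected by the bright pixels
```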
This is now ready for review; performance results and some further explanations are shown in #1330
Proper error message if calculation failed
We set the "tailcut" setting in the json file (which is not used later) to {} (empty), rather than the default 8, 4 (picture, boundary). The default appeared in the attrs of the resulting DL1 file, where it could be mistaken for the applied settings. The settings actually used are those in tailcuts_clean_with_pedestal_threshold.
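For illustration, a hedged sketch of what the written cleaning block could look like; only the empty "tailcut" entry and the tailcuts_clean_with_pedestal_threshold name come from the comment above, the inner keys and values are assumptions:

```python
import json

# Placeholders standing in for the values determined from the run's
# interleaved pedestal charges (NOT defaults, and NOT values from this PR):
found_picture_thresh = 10
found_boundary_thresh = 5

cleaning_config = {
    # Intentionally empty, so that the unused default (8, 4) does not end up
    # in the attrs of the DL1 file and get mistaken for the applied settings:
    "tailcut": {},
    # The settings that are actually applied (inner key names are assumed):
    "tailcuts_clean_with_pedestal_threshold": {
        "picture_thresh": found_picture_thresh,
        "boundary_thresh": found_boundary_thresh,
        "sigma": 2.5,
        "keep_isolated_pixels": False,
        "min_number_picture_neighbors": 2,
    },
}

with open("cleaning_config.json", "w") as f:
    json.dump(cleaning_config, f, indent=2)
```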
Looks good overall, just some minor comments regarding the docstrings. Also, if the script is calculating the additional NSB, I'd indicate that in its name.
Set in the script calling this function
My last concern is how to deal with edge cases like very short runs for which this script will fail. But that's more of an issue for OSA.
The parameters are calculated for a whole run (not per subrun), so it would really have to be a very short run, < 1 minute or so. That usually means something went very wrong and the data are not usable.
These very short runs (< 5 subruns) are created now and then as leftovers when observations of a given target are finishing (I guess due to the data acquisition configuration). Their case should be handled by OSA, e.g. by simply excluding them from the analysis from the start. Not really a problem with this script.
Find adequate cleaning levels (and MC NSB tuning settings) to process a given run, just based on the pixel charges extracted from interleaved pedestal events. See #1330 for some tests of performance.
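As a hedged illustration of how a caller such as lstosa might use this; the module path, function name, signature and return values below are assumptions made for the sketch, not the definitive interface of this PR:

```python
import json

# Assumed interface (not confirmed by this page): given the directory with the
# run's DL1 files and the run number, return the additional NSB needed in the
# MC and a cleaning-configuration dictionary for that run.
from lstchain.image.cleaning import find_tailcuts  # name/signature assumed

additional_nsb, cleaning_settings = find_tailcuts("/path/to/DL1/dir", run_number=12345)

with open("cleaning_run12345.json", "w") as f:
    json.dump(cleaning_settings, f, indent=2)

print("Additional NSB to be added in the MC:", additional_nsb)
```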